    Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production

    For the efficiency of large production tasks distributed worldwide, it is essential to provide shared production management tools composed of integrable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc.), the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameter settings that must be provided before the data transformation is invoked. To provide local-remote transparency during DC1 production, the VDC database server delivered, in a controlled way, both the validated production parameters and the templated production recipes for thousands of event generation and detector simulation jobs around the world, simplifying the production management solutions.

    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 5 pages, 3 figures, PDF. PSN TUCP01.
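
    The abstract describes a lookup pattern: a catalogued transformation step supplies validated parameters and a templated recipe, which are combined with site-specific values at invocation time. The sketch below only illustrates that pattern; the VDC_RECIPES table, its contents, the render_job helper, and the atlsim flags are all hypothetical, not the actual VDC schema or interface.

```python
# Illustrative sketch of the recipe/parameter lookup pattern described
# above; the real VDC served these from a database server, and all names
# here are hypothetical placeholders.
from string import Template

VDC_RECIPES = {
    "detector-simulation": {
        "template": Template(
            "atlsim -geometry $geometry -nevents $n_events -seed $seed"),
        "validated_params": {"geometry": "DC1-layout", "n_events": 1000},
    },
}

def render_job(step: str, **site_params) -> str:
    """Merge catalogued validated parameters with site-specific ones."""
    recipe = VDC_RECIPES[step]
    params = {**recipe["validated_params"], **site_params}
    return recipe["template"].substitute(params)

print(render_job("detector-simulation", seed=42))
# -> atlsim -geometry DC1-layout -nevents 1000 -seed 42
```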

    POOL File Catalog, Collection and Metadata Components

    The POOL project is the common persistency framework for the LHC experiments, storing petabytes of experiment data and metadata in a distributed and grid-enabled way. POOL is a hybrid event store consisting of a data streaming layer and a relational layer. This paper describes the design of the file catalog, collection and metadata components, which are not part of the data streaming layer of POOL, and outlines how POOL aims to provide transparent and efficient data access for a wide range of environments and use cases, ranging from a large production site down to a single disconnected laptop. The file catalog is the central POOL component translating logical data references to physical data files in a grid environment. POOL collections, with their associated metadata, provide an abstract way of accessing experiment data via their logical grouping into sets of related data objects.

    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, 1 EPS figure. PSN MOKT00.
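
    The file catalog's core job, translating a logical file name (LFN) into a physical file name (PFN), can be sketched as follows. The in-memory dict and the resolve helper are illustrative stand-ins only; POOL's actual catalogs were backed by XML files or relational and grid catalog services.

```python
# Minimal sketch of logical-to-physical file name resolution; the catalog
# contents and the resolve() helper are hypothetical.
CATALOG = {
    # logical file name (LFN) -> physical replicas (PFNs)
    "lfn:dc1.simul.0001": [
        "root://castor.cern.ch//atlas/dc1/simul.0001.root",
        "file:/data/replicas/simul.0001.root",
    ],
}

def resolve(lfn: str, prefer: str = "file:") -> str:
    """Pick a physical replica for an LFN, preferring local files."""
    replicas = CATALOG[lfn]
    for pfn in replicas:
        if pfn.startswith(prefer):
            return pfn
    return replicas[0]  # otherwise fall back to the first replica

print(resolve("lfn:dc1.simul.0001"))  # -> file:/data/replicas/simul.0001.root
```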

    Report from the Luminosity Task Force


    The p-adic local Langlands conjecture

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 2005. Includes bibliographical references (leaves 46-47). By Christopher D. Malon.

    Let k be a p-adic field. Split reductive groups over k can be described up to k-isomorphism by a based root datum alone, but other groups, called rational forms of the split group, involve an action of the Galois group of k. The Galois action on the based root datum is shared by members of an inner class of k-groups, in which one k-isomorphism class is quasi-split. Other forms of the inner class can be called pure or impure, depending on the Galois action. Every form of an adjoint group is pure, but only the quasi-split forms of simply connected groups are pure. A p-adic local Langlands correspondence would assign an L-packet, consisting of finitely many admissible representations of a p-adic group, to each Langlands parameter. To identify particular representations, data extending a Langlands parameter is needed to make "completed Langlands parameters." Such data has been used by Lusztig and others to complete portions of a Langlands classification for pure forms of reductive p-adic groups, and in applications such as endoscopy and the trace formula, where an entire L-packet of representations contributes at once. We consider a candidate for completed Langlands parameters to classify representations of arbitrary rational forms, and use it to extend a classification of certain supercuspidal representations by DeBacker and Reeder to include the impure forms.

    Primary Numbers Database for ATLAS Detector Description Parameters

    We present the design and status of the database for detector description parameters in the ATLAS experiment. The ATLAS Primary Numbers are the parameters defining the detector geometry and digitization in simulations, as well as certain reconstruction parameters. Since the detailed ATLAS detector description needs more than 10,000 such parameters, the preferred solution is to have a single verified source for all these data. The database stores the data dictionary for each parameter collection object, providing schema evolution support for object-based retrieval of parameters. The same Primary Numbers are served to many different clients accessing the database: the ATLAS software framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers framework FADS/Goofy, the generator of XML output for detector description, and several end-user clients for interactive data navigation, including web-based browsers and ROOT. The choice of MySQL as the database product provides additional benefits: the Primary Numbers database can be used on a developer's laptop when disconnected (using the MySQL embedded server technology), with the data being updated when the laptop is connected (using MySQL database replication).

    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 6 pages, 5 figures, PDF. PSN MOKT00.
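
    As an illustration of the single-source, versioned-parameter pattern described above, here is a minimal sketch using Python's built-in sqlite3 in place of MySQL; the table layout, column names, and values are hypothetical, not the actual ATLAS schema.

```python
# Hypothetical versioned parameter store; sqlite3 stands in for MySQL here,
# and the schema_version column mimics schema-evolution support.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE primary_numbers ("
    "  collection TEXT, name TEXT, value REAL, schema_version INTEGER)"
)
conn.executemany(
    "INSERT INTO primary_numbers VALUES (?, ?, ?, ?)",
    [
        ("pixel_barrel", "inner_radius_mm", 50.5, 1),
        ("pixel_barrel", "inner_radius_mm", 48.5, 2),  # revised in v2
    ],
)

def fetch(collection: str, version: int) -> dict:
    """Retrieve one parameter collection at a given schema version."""
    rows = conn.execute(
        "SELECT name, value FROM primary_numbers "
        "WHERE collection = ? AND schema_version = ?",
        (collection, version),
    )
    return dict(rows.fetchall())

print(fetch("pixel_barrel", version=2))  # {'inner_radius_mm': 48.5}
```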

    Development of a fatigue life prediction methodology for welded steel semi-trailer components based on a new criterion

    This paper presents a procedure developed to predict the fatigue life of steel components, based on the mechanical properties of the base material and of the Thermally Affected Zones (TAZs) produced by welding. The fatigue life cycles of the studied components are obtained for a given survival probability provided by a Weibull distribution. The procedure is intended to be applied to semi-trailer components, and is therefore proposed for the steels typically used in their manufacture. A criterion is proposed for adjusting the exponent and the stress range of the fatigue life curve in welded joints, in which the parameters that define the alternating stress versus number of cycles to failure (S-N) curve are obtained exclusively from the ratio between the base material yield stress of a given steel and the strength of its Thermally Affected Zone. This procedure is especially useful for steels that lack a complete characterization of their fatigue parameters. These developments are implemented in a subroutine that can be applied in commercial Finite Element Method (FEM) codes to obtain a fatigue life prediction. Finally, a numerical-experimental validation of the developed procedure is carried out by means of a fatigue analysis of a semi-trailer axle bracing support.
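
    The abstract does not give the criterion's constants, so the sketch below only illustrates the generic Basquin form of an S-N curve, S = Sf * (2N)**b, with made-up coefficients; it is not the paper's actual adjustment rule, which derives the parameters from the yield-stress/TAZ-strength ratio.

```python
# Generic Basquin-form S-N curve: stress amplitude S = Sf * (2N)**b.
# The default coefficients are illustrative placeholders, not the paper's
# criterion.
def cycles_to_failure(stress_amplitude_mpa: float,
                      fatigue_strength_coeff_mpa: float = 900.0,
                      basquin_exponent: float = -0.09) -> float:
    """Invert S = Sf * (2N)**b to estimate the number of cycles N."""
    ratio = stress_amplitude_mpa / fatigue_strength_coeff_mpa
    return 0.5 * ratio ** (1.0 / basquin_exponent)

# Example: a 200 MPa alternating stress under the placeholder coefficients
# gives on the order of 1e7 cycles.
print(f"{cycles_to_failure(200.0):.3g} cycles")
```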

    Metadata for ATLAS

    This document provides an overview of the metadata needed to characterize ATLAS event data at different levels: a complete run, data streams within a run, luminosity blocks within a run, and individual events.
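
    A minimal sketch of that run / stream / luminosity-block / event hierarchy as nested Python dataclasses; the field names are illustrative, not the actual ATLAS metadata schema.

```python
# Illustrative nesting of the four metadata levels named above; all field
# names are hypothetical, chosen only to show the containment structure.
from dataclasses import dataclass, field

@dataclass
class EventMeta:
    event_number: int

@dataclass
class LumiBlockMeta:
    lumi_block: int
    events: list[EventMeta] = field(default_factory=list)

@dataclass
class StreamMeta:
    stream_name: str  # e.g. a physics trigger stream
    lumi_blocks: list[LumiBlockMeta] = field(default_factory=list)

@dataclass
class RunMeta:
    run_number: int
    streams: list[StreamMeta] = field(default_factory=list)

run = RunMeta(run_number=123456,
              streams=[StreamMeta("physics_Main",
                                  [LumiBlockMeta(1, [EventMeta(42)])])])
```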

    Report of the AOD Format Task Force

    The Analysis Object Data (AOD) are produced by ATLAS reconstruction and are the main input for most analyses. The AOD, like the Event Summary Data (ESD, the other main output of reconstruction), are written as POOL files and are readable from Athena and, to a limited extent, from ROOT. The typical AOD size, processing speed, and relatively complex class structure and package dependencies make them inconvenient for most interactive analysis. According to the computing model, interactive analysis will be based on Derived Physics Data (DPD), a user-defined format commonly produced from the AOD. As of release 12.0.3 it is common practice to write DPD as Athena-aware Ntuples (AANT) in ROOT. In an effort to organize and standardize AANT, we introduced the Structured Athena-aware Ntuple (SAN), an AANT containing objects that behave, as far as ROOT interpreter limitations allow, like their AOD counterparts. Recently it was proposed to extend SAN functionality beyond DPD implementation, so that SAN objects would be used as AOD objects. The TOB formed our task force with the mandate to "perform a technical evaluation of the two proposals, one based upon the existing AOD classes and architecture, the other upon Structured Athena-Aware Ntuples. [...] Criteria for the evaluation should include I/O performance, support for schema evolution, suitability for end user analysis and simplicity."
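
    To make the flat-ntuple side of the comparison concrete, here is a minimal sketch of reading a DPD-style flat ROOT ntuple with the uproot library; the file name, tree name, and branch names are hypothetical, not a real ATLAS layout.

```python
# Sketch of reading a flat ROOT ntuple (DPD-style) with uproot; all names
# below are hypothetical placeholders.
import uproot

with uproot.open("user_dpd.root") as f:        # hypothetical DPD file
    tree = f["CollectionTree"]                 # hypothetical tree name
    arrays = tree.arrays(["el_pt", "el_eta"])  # flat per-electron branches
    # With flat AANT branches, object structure must be reassembled by
    # hand; SAN's goal was to present AOD-like objects instead.
    print(arrays["el_pt"])
```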

    POOL development status and production experience

    The Pool Of persistent Objects for LHC (POOL) project, part of the Large Hadron Collider (LHC) Computing Grid (LCG), is now entering its third year of active development. POOL provides the baseline persistency framework for three LHC experiments. It is based on a strict component model, insulating experiment software from a variety of storage technologies. This paper gives a brief overview of the POOL architecture and its main design principles, and reports the experience gained with integration into LHC experiment frameworks. It also presents recent developments in the POOL work areas of relational database abstraction and object storage into relational database management systems (RDBMS).
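
    A minimal sketch of the component-model idea, insulating client code from the concrete storage technology behind an abstract interface; the class names are illustrative, not POOL's actual C++ component interfaces.

```python
# Illustrative storage-backend abstraction; the names are hypothetical and
# only show how a strict component model insulates clients from backends.
from abc import ABC, abstractmethod

class StorageService(ABC):
    @abstractmethod
    def write(self, key: str, obj: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class StreamingFileBackend(StorageService):
    """Stand-in for a file-based object streaming layer."""
    def __init__(self):
        self._store: dict[str, bytes] = {}
    def write(self, key, obj): self._store[key] = obj
    def read(self, key): return self._store[key]

class RelationalBackend(StorageService):
    """Stand-in for object storage into an RDBMS."""
    def __init__(self):
        self._rows: dict[str, bytes] = {}
    def write(self, key, obj): self._rows[key] = obj
    def read(self, key): return self._rows[key]

def persist_event(store: StorageService, token: str, payload: bytes) -> None:
    # Client code depends only on the abstract interface, so backends can
    # be swapped without touching experiment software.
    store.write(token, payload)

persist_event(StreamingFileBackend(), "event-001", b"...")
```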

    The ATLAS EventIndex: Full chain deployment and first operation

    The EventIndex project consists of the development and deployment of a complete catalogue of events for experiments with large amounts of data, such as the ATLAS experiment at the LHC accelerator at CERN. Data to be stored in the EventIndex are produced by all production jobs that run at CERN or on the Grid; for every permanent output file, a snippet of information, containing the unique file identifier and the relevant attributes of each event, is sent to the central catalogue. The estimated insertion rate during LHC Run 2 is about 80 Hz of file records containing ∌15 kHz of event records. This contribution describes the system design, the initial performance tests of the full data collection and cataloguing chain, and the project's evolution towards full deployment and operation by the end of 2014.
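
    To give a feel for those rates, a back-of-the-envelope conversion of the quoted 80 Hz file-record and ~15 kHz event-record insertion rates into daily record counts (per-record sizes are not stated in the abstract, so only counts are computed):

```python
# Back-of-the-envelope daily volumes from the quoted Run 2 insertion rates.
FILE_RECORD_HZ = 80        # file records per second (from the abstract)
EVENT_RECORD_HZ = 15_000   # event records per second (approximate)
SECONDS_PER_DAY = 86_400

print(f"file records/day:  {FILE_RECORD_HZ * SECONDS_PER_DAY:,}")   # 6,912,000
print(f"event records/day: {EVENT_RECORD_HZ * SECONDS_PER_DAY:,}")  # 1,296,000,000
```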